

Variational Multi-Task Learning with Gumbel-Softmax Priors

Neural Information Processing Systems

This enables individual tasks to fully leverage the inductive biases provided by related tasks, thereby improving the overall performance of all tasks. Experimental results demonstrate that the proposed VMTL is able to effectively tackle a variety of challenging multi-task learning settings with limited training data, for both classification and regression.



Classification and regression of trajectories rendered as images via 2D Convolutional Neural Networks

Nicolai, Mariaclaudia, Cabini, Raffaella Fiamma, Pizzagalli, Diego Ulisse

arXiv.org Artificial Intelligence

Trajectories can be regarded as time-series of coordinates, typically arising from motile objects. Methods for trajectory classification are particularly important for detecting different movement patterns, while regression methods are used to compute motility metrics and perform forecasting. Recent advances in computer vision have facilitated the processing of trajectories rendered as images via artificial neural networks with 2D convolutional layers (CNNs). This approach leverages the capability of CNNs to learn spatial hierarchies of features from images, necessary to recognize complex shapes. Moreover, it overcomes the limitation of other machine learning methods that require input trajectories with a fixed number of points. However, rendering trajectories as images can introduce poorly investigated artifacts, such as information loss due to the plotting of coordinates on a discrete grid, and spectral changes due to line thickness and aliasing. In this study, we investigate the effectiveness of CNNs for solving classification and regression problems from synthetic trajectories that have been rendered as images using different modalities. The parameters considered in this study include line thickness, image resolution, usage of motion history (color-coding of the temporal component) and anti-aliasing. Results highlight the importance of choosing an appropriate image resolution according to model depth, and of motion history in applications where movement direction is critical.
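To make the rendering step concrete, here is a minimal sketch of rasterizing a trajectory of (x, y) coordinates onto a discrete pixel grid. The resolution and normalization choices are illustrative assumptions, not the authors' pipeline, but the sketch shows how distinct coordinates can collapse into the same pixel, which is the kind of information loss the paper investigates.

```python
def render_trajectory(points, resolution=32):
    """Rasterize a list of (x, y) coordinates onto a resolution x resolution grid.

    Illustrative sketch only: normalization and resolution are assumptions,
    not the rendering pipeline used in the paper.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0  # avoid division by zero for degenerate input
    span_y = (max(ys) - min_y) or 1.0
    grid = [[0] * resolution for _ in range(resolution)]
    for x, y in points:
        col = int((x - min_x) / span_x * (resolution - 1))
        row = int((y - min_y) / span_y * (resolution - 1))
        grid[row][col] = 1  # nearby points can land in the same cell: information loss
    return grid
```

At low resolutions, trajectories that differ only by sub-pixel displacements render to identical images, which is one reason the paper's resolution parameter matters.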


Context-Sensitive Decision Forests for Object Detection Peter Kontschieder 1 Samuel Rota Bulò 2 Antonio Criminisi

Neural Information Processing Systems

In this paper we introduce Context-Sensitive Decision Forests, a new perspective for exploiting contextual information in the popular decision forest framework for the object detection problem. They are tree-structured classifiers with the ability to access intermediate prediction (here: classification and regression) information during training and inference. This intermediate prediction is available for each sample and allows us to develop context-based decision criteria used to refine the prediction process. In addition, we introduce a novel split criterion which, in combination with a priority-based way of constructing the trees, allows more accurate regression mode selection and hence improves the current context information. In our experiments, we demonstrate improved results for the task of pedestrian detection on the challenging TUD data set when compared to state-of-the-art methods.


A Tutorial on the Pretrain-Finetune Paradigm for Natural Language Processing

Wang, Yu

arXiv.org Artificial Intelligence

The pretrain-finetune paradigm represents a transformative approach in natural language processing (NLP). This paradigm distinguishes itself through the use of large pretrained language models, demonstrating remarkable efficiency in finetuning tasks, even with limited training data. This efficiency is especially beneficial for research in social sciences, where the number of annotated samples is often quite limited. Our tutorial offers a comprehensive introduction to the pretrain-finetune paradigm. We first delve into the fundamental concepts of pretraining and finetuning, followed by practical exercises using real-world applications. We demonstrate the application of the paradigm across various tasks, including multi-class classification and regression. Emphasizing its efficacy and user-friendliness, the tutorial aims to encourage broader adoption of this paradigm. To this end, we have provided open access to all our code and datasets. The tutorial is particularly valuable for quantitative researchers in psychology, offering them an insightful guide into this innovative approach.


Building your own Object Detector from scratch with Tensorflow

#artificialintelligence

In this story, we talk about how to build a Deep Learning Object Detector from scratch using TensorFlow. Instead of using a predefined model, we will define each layer in the network and then train the model to predict both the object's bounding box and its class. Finally, we will evaluate the model using the IoU metric. TL;DR: need the code right now? Check this Colab notebook or this GitHub repository. Object Detection is the task of automatically finding semantic objects in an image. Today, Object Detectors like YOLO v4/v5/v7 and v8 achieve state-of-the-art accuracy at impressive real-time FPS rates.
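Since the post evaluates the detector with the IoU metric, here is a minimal sketch of how it is commonly computed; the `[x1, y1, x2, y2]` box format is an assumption for illustration and the post's own evaluation code may differ.

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes in [x1, y1, x2, y2] format."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # intersection 1, union 7 -> ~0.1429
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.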


Bank Customer Churn Prediction Using Machine Learning

#artificialintelligence

This article was published as a part of the Data Science Blogathon. Customer Churn prediction means knowing which customers are likely to leave or unsubscribe from your service. For many companies, this is an important prediction. This is because acquiring new customers often costs more than retaining existing ones. Once you've identified customers at risk of churn, you need to know exactly what marketing efforts you should make with each customer to maximize their likelihood of staying.


Top 10 Machine Learning Algorithms Explained

#artificialintelligence

Linear Regression: a statistical technique in which the value of the dependent variable is predicted from the independent variable. The relationship is formed by mapping the dependent and independent variables onto a line, called the regression line, represented by Y = a*X + b, where Y is the dependent variable (for example, weight), X is the independent variable (e.g., height), b is the intercept, and a is the slope. Logistic Regression: in logistic regression, classification is done by building an equation from the data. This method is used to find a discrete dependent variable from a set of independent variables; its goal is to find the best-fitting set of parameters. In this classifier, each feature is multiplied by a weight, and then all are summed.
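Both descriptions can be sketched in a few lines. The height/weight numbers below are toy values made up for illustration: the linear part fits Y = a*X + b by ordinary least squares, and the logistic part shows the weighted sum of features squashed by the sigmoid into a probability.

```python
import math

# Linear regression: fit Y = a*X + b by least squares on toy (height, weight) pairs.
xs = [150.0, 160.0, 170.0, 180.0]          # illustrative heights
ys = [50.0, 56.0, 62.0, 68.0]              # illustrative weights (exactly 0.6*X - 40)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)   # slope
b = mean_y - a * mean_x                    # intercept
print(a, b)

# Logistic regression: each feature is multiplied by a weight, the products are
# summed with an intercept, and the sigmoid maps the score into (0, 1).
def predict_proba(features, weights, intercept):
    z = intercept + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

With a zero score the sigmoid returns 0.5, the decision boundary between the two classes.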


SparseChem: Fast and accurate machine learning model for small molecules

Arany, Adam, Simm, Jaak, Oldenhof, Martijn, Moreau, Yves

arXiv.org Machine Learning

SparseChem provides fast and accurate machine learning models for biochemical applications. In particular, the package supports very high-dimensional sparse inputs, e.g., millions of features and millions of compounds. It is possible to train classification, regression and censored regression models, or a combination of them, from the command line. Additionally, the library can be accessed directly from Python. Source code and documentation are freely available under the MIT License on GitHub.


A Tour of Machine Learning Algorithms

#artificialintelligence

In this post, we will take a tour of the most popular machine learning algorithms. It is useful to tour the main algorithms in the field to get a feeling of what methods are available. There are so many algorithms that it can feel overwhelming when algorithm names are thrown around and you are expected to just know what they are and where they fit. I want to give you two ways to think about and categorize the algorithms you may come across in the field. Both approaches are useful, but we will focus in on the grouping of algorithms by similarity and go on a tour of a variety of different algorithm types.